M-Tuning: Prompt Tuning with Mitigated Label Bias in Open-Set Scenarios

Published 9 Mar 2023 in cs.CV (arXiv:2303.05122v3)

Abstract: In realistic open-set scenarios, the labels of some test inputs are entirely unknown. When vision-language (VL) prompt learning methods encounter inputs from unknown classes (i.e., classes not seen during training), they nevertheless predict one of the training classes. This label bias hinders open-set recognition (OSR), where an image should be correctly classified as one of the known classes or rejected as unknown. To this end, we propose a vision-language prompt tuning method with mitigated label bias (M-Tuning). It introduces open words from WordNet to extend the vocabulary forming the prompt texts beyond the closed-set label words, so that prompts are tuned in a simulated open-set scenario. In addition, motivated by the observation that classifying directly over a large label space yields a much higher false positive rate than over a small one, we propose a Combinatorial Tuning and Testing (CTT) strategy to improve performance. CTT decomposes M-Tuning on large datasets into multiple independent group-wise tunings over fewer classes each, then makes accurate and comprehensive predictions by selecting the optimal sub-prompt. Finally, given the lack of VL-based OSR baselines in the literature, especially for prompt-based methods, we contribute new baselines for fair comparison. Our method achieves the best performance on datasets of various scales, and extensive ablation studies validate its effectiveness.
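The CTT idea described above (group-wise scoring followed by selection of the best sub-prompt, with rejection as unknown) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name `ctt_predict`, the use of a single score threshold for the unknown decision, and the representation of sub-prompt outputs as per-group score arrays are all assumptions made for the sketch.

```python
import numpy as np

def ctt_predict(group_scores, threshold=0.5):
    """Sketch of CTT-style testing.

    group_scores: one 1-D array per class group, where each entry is the
    matching score of the image against one known-class prompt tuned for
    that group (a hypothetical interface, assumed for illustration).
    Returns (group_index, class_index) of the best-matching known class,
    or None when every score falls below `threshold`, i.e. the image is
    predicted as an unknown class.
    """
    best = None  # (score, group_index, class_index)
    for g, scores in enumerate(group_scores):
        c = int(np.argmax(scores))  # best class within this group
        if best is None or scores[c] > best[0]:
            best = (float(scores[c]), g, c)
    if best is None or best[0] < threshold:
        return None  # reject: no sub-prompt matches confidently enough
    return best[1], best[2]

# Example: group 0's second class wins; a weak-everywhere input is rejected.
print(ctt_predict([np.array([0.1, 0.9]), np.array([0.3, 0.2])]))  # (0, 1)
print(ctt_predict([np.array([0.1, 0.2]), np.array([0.15, 0.05])]))  # None
```

The selection over independent groups mirrors the decomposition argument in the abstract: each group-wise tuning sees only a small label space, and the final prediction is assembled by comparing the groups' best candidates.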

References (57)
  1. Scheirer, W.J., Rezende Rocha, A., Sapkota, A., Boult, T.E.: Toward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(7), 1757–1772 (2012) Bendale and Boult [2016] Bendale, A., Boult, T.E.: Towards open set deep networks. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1563–1572 (2016) Oza and Patel [2019] Oza, P., Patel, V.M.: C2ae: Class conditioned auto-encoder for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2307–2316 (2019) Miller et al. [2021] Miller, D., Sunderhauf, N., Milford, M., Dayoub, F.: Class anchor clustering: A loss for distance-based open set recognition. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3570–3578 (2021) Kolesnikov et al. [2020] Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., Houlsby, N.: Big transfer (bit): General visual representation learning. In: Eur. Conf. Comput. Vis., pp. 491–507 (2020). Springer Sun et al. [2021] Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2iosr: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021) Neal et al. [2018] Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. [2017] Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. 
IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. 
[2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Bendale, A., Boult, T.E.: Towards open set deep networks. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1563–1572 (2016) Oza and Patel [2019] Oza, P., Patel, V.M.: C2ae: Class conditioned auto-encoder for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2307–2316 (2019) Miller et al. [2021] Miller, D., Sunderhauf, N., Milford, M., Dayoub, F.: Class anchor clustering: A loss for distance-based open set recognition. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3570–3578 (2021) Kolesnikov et al. 
[2020] Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., Houlsby, N.: Big transfer (bit): General visual representation learning. In: Eur. Conf. Comput. Vis., pp. 491–507 (2020). Springer Sun et al. [2021] Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2iosr: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021) Neal et al. [2018] Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. [2017] Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. 
[2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Oza, P., Patel, V.M.: C2ae: Class conditioned auto-encoder for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2307–2316 (2019) Miller et al. [2021] Miller, D., Sunderhauf, N., Milford, M., Dayoub, F.: Class anchor clustering: A loss for distance-based open set recognition. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3570–3578 (2021) Kolesnikov et al. [2020] Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., Houlsby, N.: Big transfer (bit): General visual representation learning. In: Eur. Conf. Comput. Vis., pp. 491–507 (2020). Springer Sun et al. [2021] Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2iosr: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021) Neal et al. [2018] Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. [2017] Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). 
Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. 
Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. 
In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. 
arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). 
Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. 
(2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). 
PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. 
[2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). 
PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. 
In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. 
[2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
IEEE Trans. Pattern Anal. Mach. Intell. (2020)
Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang and Li [2021] Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. 
[2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2iosr: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021) Neal et al. [2018] Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. [2017] Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. 
[2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. [2017] Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. 
[2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning.
1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Oza, P., Patel, V.M.: C2AE: Class conditioned auto-encoder for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2307–2316 (2019)
Miller, D., Sunderhauf, N., Milford, M., Dayoub, F.: Class anchor clustering: A loss for distance-based open set recognition. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3570–3578 (2021)
Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., Houlsby, N.: Big Transfer (BiT): General visual representation learning. In: Eur. Conf. Comput. Vis., pp. 491–507 (2020). Springer
Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2IOSR: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021)
Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018)
Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017)
Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021)
Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer
Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, D., Sunderhauf, N., Milford, M., Dayoub, F.: Class anchor clustering: A loss for distance-based open set recognition. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3570–3578 (2021) Kolesnikov et al. [2020] Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., Houlsby, N.: Big transfer (bit): General visual representation learning. In: Eur. Conf. Comput. Vis., pp. 491–507 (2020). Springer Sun et al. [2021] Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2iosr: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021) Neal et al. [2018] Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. 
(2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. 
[2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
(2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). 
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. 
8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. 
Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. 
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022)
Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? 
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  4. Miller, D., Sunderhauf, N., Milford, M., Dayoub, F.: Class anchor clustering: A loss for distance-based open set recognition. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3570–3578 (2021)
  5. Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., Houlsby, N.: Big transfer (BiT): General visual representation learning. In: Eur. Conf. Comput. Vis., pp. 491–507 (2020). Springer
  6. Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2IOSR: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021)
  7. Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018)
  8. Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017)
  9. Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021)
  10. Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer
  11. Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
  12. Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
  13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. [2017] Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. 
arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. 
[2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. 
In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. 
[2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. 
[2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. 
[2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. 
[2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. 
In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang and Li [2021] Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  5. Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., Houlsby, N.: Big transfer (BiT): General visual representation learning. In: Eur. Conf. Comput. Vis., pp. 491–507 (2020). Springer
  6. Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2IOSR: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021)
  7. Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018)
  8. Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017)
  9. Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021)
 10. Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer
 11. Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
 12. Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
 13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
 14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
 15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
 16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
 17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
 18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
 19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
 20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
 21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
 22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
 23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
 24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
 25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
 26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
 27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
 28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
 29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
 30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
 31. Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
 32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
 33. Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
 34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
 35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
 36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
 37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
 38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
 39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
 40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
 41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
 42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
 43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
 44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
 45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
 46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
 47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
 48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
 49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
 50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
 51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
 52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
 53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
 54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
 55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
 56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
 57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. 
8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
  6. Sun, X., Ding, H., Zhang, C., Lin, G., Ling, K.-V.: M2IOSR: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373 (2021)
  7. Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018)
  8. Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017)
  9. Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021)
  10. Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer
  11. Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
  12. Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
  13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp.
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018) Yu et al. [2017] Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. 
[2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017) Zhou et al. [2021] Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021) Zhang et al. [2020] Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. 
arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang and Li [2021] Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
[2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? 
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  7. Neal, L., Olson, M., Fern, X., Wong, W.-K., Li, F.: Open set learning with counterfactual images. In: Eur. Conf. Comput. Vis., pp. 613–628 (2018)
  8. Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017)
  9. Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021)
  10. Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer
  11. Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
  12. Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
  13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. 
[2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. 
[2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? 
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. 
[2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. 
[2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang and Li [2021] Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
Khattak et al.
[2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. 
[2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. 
[2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 
(2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. 
[2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. 
In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019) Zhou et al.
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al.
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. 
arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent.
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction.
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). 
PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang and Li [2021] Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision.
In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  8. Yu, Y., Qu, W.-Y., Li, N., Guo, Z.: Open-category classification by adversarial sample generation. In: IJCAI, pp. 3357–3363 (2017)
  9. Zhou, D.-W., Ye, H.-J., Zhan, D.-C.: Learning placeholders for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4401–4410 (2021)
  10. Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer
  11. Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
  12. Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
  13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: Wordnet: A lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). 
PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
IEEE Trans. Pattern Anal. Mach. Intell. (2020)
13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
31. Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
33. Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). 
Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. 
(2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). 
PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. 
[2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. 
[2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. 
[2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? 
Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang and Li [2021] Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop.
arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
  10. Zhang, H., Li, A., Guo, J., Guo, Y.: Hybrid models for open set recognition. In: Eur. Conf. Comput. Vis., pp. 102–117 (2020). Springer Krizhevsky [2009] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009) Yang et al. [2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. 
[2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al.
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yang et al.
[2020] Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020) Yoshihashi et al. [2019] Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019) Zang et al. [2022] Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022) Bendale and Boult [2015] Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know?
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022)
  11. Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
  12. Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
  13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
  12. Yang, H.-M., Zhang, X.-Y., Yin, F., Yang, Q., Liu, C.-L.: Convolutional prototype network for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2020)
  13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: MaPLe: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: MAPL: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
[2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  13. Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., Naemura, T.: Classification-reconstruction learning for open-set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 4016–4025 (2019)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: Wordnet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  14. Zang, Y., Li, W., Zhou, K., Huang, C., Loy, C.C.: Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225 (2022)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015)
  16. Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
  17. Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
  18. Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
  19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: Wordnet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. 
[2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  15. Bendale, A., Boult, T.: Towards open world recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 1893–1902 (2015) Khattak et al. [2022] Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. 
[2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022) Mañas et al. [2022] Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. 
arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022) Bulat and Tzimiropoulos [2022] Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022) Ding et al. [2022] Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022) Xing et al. [2022] Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. 
Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. 
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller [1995] Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al.
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 
34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Khattak, M.U., Rasheed, H., Maaz, M., Khan, S., Khan, F.S.: Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117 (2022)
Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
PMLR
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Mañas, O., Rodriguez, P., Ahmadi, S., Nematzadeh, A., Goyal, Y., Agrawal, A.: Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting. arXiv preprint arXiv:2210.07179 (2022)
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: Wordnet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. 
[2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. 
[2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. 
(2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). 
PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Bulat, A., Tzimiropoulos, G.: Language-aware soft prompting for vision & language foundation models. arXiv preprint arXiv:2210.01115 (2022)
Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
8710–8719 (2021)
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
19. Ding, K., Wang, Y., Liu, P., Yu, Q., Zhang, H., Xiang, S., Pan, C.: Prompt tuning with soft context sharing for vision-language models. arXiv preprint arXiv:2208.13474 (2022)
Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022)
Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: Wordnet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. 
(2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). 
PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? 
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 
130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
  20. Xing, Y., Wu, Q., Cheng, D., Zhang, S., Liang, G., Zhang, Y.: Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340 (2022) Júnior et al. [2017] Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017) Rudd et al. [2017] Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017) Moon et al. [2022] Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer Chen et al. [2020] Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer Chen et al. [2021] Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. [2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. 
[2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges.
Transactions on Machine Learning Research (234) (2022)
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. 
[2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  21. Júnior, P.R.M., De Souza, R.M., Werneck, R.d.O., Stein, B.V., Pazinato, D.V., Almeida, W.R., Penatti, O.A., Torres, R.d.S., Rocha, A.: Nearest neighbors distance ratio open-set classifier. Machine Learning 106(3), 359–386 (2017)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021) Huang and Li [2021] Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021) Petroni et al. 
[2019] Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022)
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  22. Rudd, E.M., Jain, L.P., Scheirer, W.J., Boult, T.E.: The extreme value machine. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 762–768 (2017)
  23. Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
  24. Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
  26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
  28. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. 
(2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). 
PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? 
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 
130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
Moon, W., Park, J., Seong, H.S., Cho, C.-H., Heo, J.-P.: Difficulty-aware simulator for open set recognition. In: Eur. Conf. Comput. Vis., pp. 365–381 (2022). Springer
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. 
[2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., Tian, Y.: Learning open set network with discriminative reciprocal points. In: Eur. Conf. Comput. Vis., pp. 507–522 (2020). Springer
Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Miller, G.A.: Wordnet: A lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 
34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  25. Chen, G., Peng, P., Wang, X., Tian, Y.: Adversarial reciprocal points learning for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
Huang, R., Li, Y.: Mos: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
26. Huang, R., Li, Y.: MOS: Towards scaling out-of-distribution detection for large semantic space. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 8710–8719 (2021)
Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. 
[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. 
Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. 
[2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? 
Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). 
PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  27. Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.: Language models as knowledge bases? In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463–2473 (2019) Lester et al. [2021] Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. 
arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. 
(2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021) Shin et al. [2020] Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020) Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. 
(2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). 
PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. 
AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. 
Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. 
Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059 (2021)
Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. 
[2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. 
[2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. 
Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. 
In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  29. Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., Singh, S.: Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235 (2020)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR
  31. Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995)
  32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
  33. Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Transactions on Machine Learning Research (234) (2022) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. 
CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. 
[2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 
813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. 
Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR Miller [1995] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. 
[2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39–41 (1995) Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018) Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. 
CS 231N 7(7), 3 (2015) Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015) Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) Kong and Ramanan [2021] Kong, S., Ramanan, D.: Opengan: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021) Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Loshchilov and Hutter [2018] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le and Yang [2015] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky et al. [2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong and Ramanan [2021] Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang et al. [2020] Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. 
Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Sun et al.
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? 
In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? 
In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. 
Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
32. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Int. Conf. Learn. Represent. (2018)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 
130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. 
arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 
34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 
130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. 
Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. 
AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. 
Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. 
Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
Le, Y., Yang, X.: Tiny ImageNet visual recognition challenge. CS 231N 7(7), 3 (2015)
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020) Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 
arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. 
[2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
  34. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
  36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. 
[2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. 
Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. 
103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. 
name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. 
[2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. 
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. 
[2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. 
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022)
  36. Kong, S., Ramanan, D.: OpenGAN: Open-set recognition via open data generation. In: Int. Conf. Comput. Vis., pp. 813–822 (2021)
  37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
  39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
  40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. 
AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. 
Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. 
Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
37. Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Transactions of the Association for Computational Linguistics 8, 423–438 (2020)
Li and Liang [2021] Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021)
Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 
4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 
32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 
34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. 
Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  38. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597 (2021) Liu et al. [2021] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021) Poerner et al. [2019] Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. 
[2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. 
Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022) Recht et al. [2019] Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 
34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. 
[2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
39. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)
Poerner, N., Waltinger, U., Schütze, H.: Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
[2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. 
[2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. 
[2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. 
AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. 
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  40. Poerner, N., Waltinger, U., Schütze, H.: BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. arXiv preprint arXiv:1911.03681 (2019)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do ImageNet classifiers generalize to ImageNet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction.
In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. 
[2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  41. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337–2348 (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR
  43. Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
[2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. 
[2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 
2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. 
In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. 
(2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  42. Recht, B., Roelofs, R., Schmidt, L., Shankar, V.: Do imagenet classifiers generalize to imagenet? In: International Conference on Machine Learning, pp. 5389–5400 (2019). PMLR Wang et al. [2019] Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. 
[2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019) Hendrycks et al. [2021] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 
16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. [2022] Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022) Liu et al. [2019] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. 
[2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021) Vaze et al. [2022] Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022) Zhou et al. [2022] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022) Jia et al. [2021] Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR Guo et al. [2021] Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021) Yue et al. [2021] Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021) Li et al. [2021] Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021) Lu et al. 
Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: PMAL: Open set recognition via robust prototype mining. In: AAAI (2022)
Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Int. Conf. Learn. Represent. (2017)
Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional Gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (2022)
Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
Wang, H., Ge, S., Lipton, Z., Xing, E.P.: Learning robust global representations by penalizing local predictive power. Adv. Neural Inform. Process. Syst. 32 (2019)
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
[2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019) Fort et al. [2021] Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021) Hendrycks and Gimpel [2017] Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017) Chen et al. [2022] Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. 
Transactions on Machine Learning Research (234) (2022) Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022) Sun et al. [2020] Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020) Salehi et al. [2022] Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022) Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
  44. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15262–15271 (2021)
  45. Vaze, S., Han, K., Vedaldi, A., Zisserman, A.: Open-set recognition: A good closed-set classifier is all you need? In: Int. Conf. Learn. Represent. (2022)
  46. Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Conditional prompt learning for vision-language models. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 16816–16825 (2022)
  47. Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International Conference on Machine Learning, pp. 4904–4916 (2021). PMLR
  48. Guo, Y., Camporese, G., Yang, W., Sperduti, A., Ballan, L.: Conditional variational capsule network for open set recognition. In: Int. Conf. Comput. Vis., pp. 103–111 (2021)
  49. Yue, Z., Wang, T., Sun, Q., Hua, X.-S., Zhang, H.: Counterfactual zero-shot and open-set visual recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 15404–15414 (2021)
  50. Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., Hoi, S.C.H.: Align before fuse: Vision and language representation learning with momentum distillation. Adv. Neural Inform. Process. Syst. 34 (2021)
  51. Lu, J., Xu, Y., Li, H., Cheng, Z., Niu, Y.: Pmal: Open set recognition via robust prototype mining. AAAI (2022)
  52. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 2537–2546 (2019)
  53. Fort, S., Ren, J., Lakshminarayanan, B.: Exploring the limits of out-of-distribution detection. Adv. Neural Inform. Process. Syst. 34, 7068–7081 (2021)
  54. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. Int. Conf. Learn. Represent. (2017)
  55. Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., Chen, H.: Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In: Proceedings of the ACM Web Conference 2022, pp. 2778–2788 (2022)
  56. Sun, X., Yang, Z., Zhang, C., Ling, K.-V., Peng, G.: Conditional gaussian distribution learning for open set recognition. In: IEEE Conf. Comput. Vis. Pattern Recog., pp. 13480–13489 (2020)
  57. Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M., Sabokrou, M., et al.: A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. Transactions on Machine Learning Research (234) (2022)
